
    Multi-score Learning for Affect Recognition: the Case of Body Postures

    An important challenge in building automatic affective state recognition systems is establishing the ground truth. When the ground truth is not available, observers are often used to label training and testing sets. Unfortunately, inter-rater reliability between observers tends to vary from fair to moderate when dealing with naturalistic expressions. Nevertheless, the most common approach is to label each expression with the most frequent label assigned to it by the observers. In this paper, we propose a general pattern recognition framework that takes into account the variability between observers for automatic affect recognition. This leads to what we term a multi-score learning problem, in which a single expression is associated with multiple values representing the scores of each available emotion label. We also propose several performance measures and pattern recognition methods for this framework, and report the experimental results obtained when testing and comparing these methods on two affective posture datasets.
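    The core idea can be sketched in a few lines. This is an illustrative toy example, not the paper's actual method: observer labels for one expression are turned into a score vector over emotion labels instead of being collapsed into a single majority label. All names and data below are hypothetical.

```python
# Toy sketch of the multi-score idea: each expression keeps a score per
# emotion label, derived from observer ratings, rather than one hard label.
from collections import Counter

def observer_scores(labels, emotions):
    """Turn the observer labels for one expression into a score vector."""
    counts = Counter(labels)
    total = len(labels)
    return {e: counts.get(e, 0) / total for e in emotions}

EMOTIONS = ["angry", "happy", "sad"]

# Three observers rated the same posture; no single ground truth exists.
ratings = ["happy", "happy", "sad"]
scores = observer_scores(ratings, EMOTIONS)

# The common majority-label approach discards the disagreement entirely:
majority = max(scores, key=scores.get)   # "happy"
```

    A multi-score learner would be trained to predict the full `scores` vector, so the fair-to-moderate inter-rater disagreement becomes part of the target rather than noise to be voted away.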

    Automatic Recognition of Affective Body Movement in a Video Game Scenario

    This study aims to recognize the affective states of players from non-acted, non-repeated body movements in the context of a video game scenario. A motion capture system was used to collect the movements of the participants while playing a Nintendo Wii tennis game. A combination of body movement features and a machine learning technique was then used to automatically recognize emotional states from body movements. Our system was then tested for its ability to generalize to new participants and to new body motion data using a sub-sampling validation technique. To train and evaluate our system, online evaluation surveys were created using the body movements collected from the motion capture system, and human observers were recruited to classify them into affective categories. The results showed that observer agreement levels were above chance and that the automatic recognition system achieved recognition rates comparable to the observers' benchmark. © 2012 ICST Institute for Computer Science, Social Informatics and Telecommunications Engineering
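    Testing generalization to new participants implies splitting by participant rather than by individual samples. The following is a minimal sketch of that idea under assumed data structures (the tuple layout and function name are illustrative, not from the paper):

```python
# Split by participant ID so no player appears in both training and test
# sets; this is what "generalizing to new participants" requires.
import random

def split_by_participant(samples, test_fraction=0.25, seed=0):
    """samples: list of (participant_id, features, label) tuples."""
    ids = sorted({pid for pid, _, _ in samples})
    rng = random.Random(seed)
    rng.shuffle(ids)
    n_test = max(1, int(len(ids) * test_fraction))
    test_ids = set(ids[:n_test])
    train = [s for s in samples if s[0] not in test_ids]
    test = [s for s in samples if s[0] in test_ids]
    return train, test

# Hypothetical data: 8 participants, 3 motion samples each.
data = [(pid, [0.0], "relaxed") for pid in range(8) for _ in range(3)]
train, test = split_by_participant(data)
```

    Repeating such splits over random subsets of participants gives a sub-sampling validation of the kind the abstract describes.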

    What does touch tell us about emotions in touchscreen-based gameplay?

    This is the post-print version of the article; the official published version can be accessed from the link below. Copyright © 2012 ACM. It is posted here by permission of ACM for your personal use, not for redistribution.

    Nowadays, more and more people play games on touch-screen mobile phones. This phenomenon raises a very interesting question: does touch behaviour reflect the player's emotional state? If so, this would be a valuable evaluation indicator not only for game designers, but also for real-time personalization of the game experience. Psychology studies on acted touch behaviour show the existence of discriminative affective profiles. In this paper, finger-stroke features during gameplay on an iPod were extracted and their discriminative power analysed. Based on touch behaviour, machine learning algorithms were used to build systems for automatically discriminating between four emotional states (Excited, Relaxed, Frustrated, Bored), two levels of arousal, and two levels of valence. The results were promising, reaching between 69% and 77% correct discrimination between the four emotional states. Higher rates (~89%) were obtained for discriminating between two levels of arousal and between two levels of valence.
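    To make "finger-stroke features" concrete, here is a hedged sketch of the kind of descriptors one might compute from raw touch samples. The feature names and the `(x, y, t)` representation are assumptions for illustration, not the paper's actual feature set:

```python
# Derive simple descriptors (length, duration, mean speed) from one
# finger stroke recorded as (x, y, timestamp) touch samples.
import math

def stroke_features(points):
    """points: list of (x, y, t) tuples for one finger stroke."""
    length = sum(
        math.dist(p[:2], q[:2]) for p, q in zip(points, points[1:])
    )
    duration = points[-1][2] - points[0][2]
    return {
        "length": length,
        "duration": duration,
        "mean_speed": length / duration if duration > 0 else 0.0,
    }

# A hypothetical one-second swipe across the screen.
stroke = [(0.0, 0.0, 0.0), (3.0, 4.0, 0.5), (6.0, 8.0, 1.0)]
feats = stroke_features(stroke)
```

    Vectors of such per-stroke features are what a classifier would consume when discriminating between the four emotional states or the arousal/valence levels.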

    A new multi-modal dataset for human affect analysis

    In this paper we present a new multi-modal dataset of spontaneous three-way human interactions. Participants were recorded in an unconstrained environment at various locations during a sequence of debates in a video-conference, Skype-style arrangement. An additional depth modality was introduced, which permitted the capture of 3D information in addition to the video and audio signals. The dataset consists of 16 participants and is subdivided into 6 unique sections. It was manually annotated on a continuous scale across 5 affective dimensions: arousal, valence, agreement, content, and interest. The annotation was performed by three human annotators, with the ensemble average calculated for use in the dataset. The corpus enables the analysis of human affect during conversations in a real-life scenario. We first briefly review existing affect datasets and the methodologies related to affect dataset construction, and then detail how our unique dataset was constructed.
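    The ensemble average mentioned above is straightforward to state precisely. A minimal sketch, assuming each annotator produces a per-frame trace for a given affective dimension (the trace values below are invented):

```python
# Combine continuous per-frame annotations from several raters by taking
# their ensemble average at each frame, per affective dimension.
def ensemble_average(annotations):
    """annotations: list of per-frame traces, one trace per annotator."""
    return [sum(frame) / len(frame) for frame in zip(*annotations)]

# Hypothetical valence traces from the three annotators over three frames.
raters = [
    [0.2, 0.4, 0.6],
    [0.0, 0.5, 0.7],
    [0.1, 0.3, 0.8],
]
avg = ensemble_average(raters)
```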

    Nuclear protein kinase activities during the cell cycle of HeLa S3 cells

    To ascertain the activity and substrate specificity of nuclear protein kinases during various stages of the cell cycle of HeLa S3 cells, a nuclear phosphoprotein-enriched sample was extracted from synchronised cells and assayed in vitro in the presence of homologous substrates. The nuclear protein kinases increased in activity during S and G2 phase to a level that was twice that of kinases from early S phase cells. The activity was reduced during mitosis but increased again in G1 phase. When the phosphoproteins were separated into five fractions by cellulose-phosphate chromatography, each fraction, though not homogeneous, exhibited differences in activity. Variations in the activity of the protein kinase fractions were observed during the cell cycle, similar to those observed for the unfractionated kinases. Sodium dodecyl sulfate polyacrylamide gel electrophoretic analysis of the proteins phosphorylated by each of the five kinase fractions demonstrated a substrate specificity. The fractions also exhibited some cell cycle stage-specific preference for substrates; kinases from G1 cells phosphorylated mainly high molecular weight polypeptides, whereas lower molecular weight species were phosphorylated by kinases from the S, G2 and mitotic stages of the cell cycle. Inhibition of DNA and histone synthesis by cytosine arabinoside had no effect on the activity or substrate specificity of S phase kinases. Some kinase fractions phosphorylated histones as well as non-histone chromosomal proteins, and this phosphorylation was also cell cycle stage dependent. The presence of histones in the in vitro assay influenced the ability of some fractions to phosphorylate particular non-histone polypeptides; non-histone proteins also appeared to affect the in vitro phosphorylation of histones. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/23438/1/0000387.pd

    Characterization of five members of the actin gene family in the sea urchin

    Hybridization of an actin cDNA clone (pSA38) to restriction enzyme digests of Strongylocentrotus purpuratus DNA indicates that the sea urchin genome contains at least five different actin genes. A sea urchin genomic clone library was screened for recombinants which hybridize to pSA38, and four genomic clones were isolated. Restriction maps were generated which indicate that three of these recombinants contain different actin genes, and that the fourth may be an allele of one of these. The restriction maps suggest that one clone contains two linked actin genes. This fact, which was confirmed by heteroduplex analysis, indicates that the actin gene family may be clustered. The linked genes are oriented in the same direction and spaced about 8.0 kilobases apart. In heteroduplexes between genomic clones two intervening sequences were seen. Significant homology is confined to the actin coding region and does not include any flanking sequence. Southern blot analysis reveals that repetitive DNA sequences are found in the region of the actin genes. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/24164/1/0000422.pd

    Bodily Expression of Social Initiation Behaviors in ASC and non-ASC children: Mixed Reality vs. LEGO Game Play

    This study is part of a larger project that showed the potential of our mixed reality (MR) system in fostering social initiation behaviors in children with Autism Spectrum Condition (ASC). We compared it to a typical social intervention strategy based on construction tools, where both mediated a face-to-face dyadic play session between an ASC child and a non-ASC child. In this study, our first goal is to show that an MR platform can be used to alter the nonverbal body behavior between ASC and non-ASC children during social interaction as much as a traditional therapy setting (LEGO). A second goal is to show how these body cues differ between ASC and non-ASC children during social initiation on the two platforms. We present our first analysis of the body cues generated under the two conditions in a repeated-measures design. Body cue measurements were obtained from skeleton information and characterized as spatio-temporal features, both for each subject individually (e.g. distances between joints and velocities of joints) and interpersonally (e.g. proximity and visual focus of attention). We used machine learning techniques to analyze the visual data of eighteen trials of ASC and non-ASC dyads. Our experiments showed that: (i) there were differences between ASC and non-ASC bodily expressions, at both the individual and the interpersonal level, in LEGO and in the MR system during social initiation; (ii) the number of features indicating differences between ASC and non-ASC nonverbal behavior during initiation was higher in the MR system than in LEGO; and (iii) computational models evaluated with combinations of these features enabled recognition of the social initiation type (ASC or non-ASC) from body features in both the LEGO and MR settings. We did not observe significant differences in performance between the evaluated models for the LEGO and MR environments. Since the recognition scores in the MR setting were lower than in the LEGO setting, the MR system may encourage nonverbal behaviors that are more similar across children than the LEGO environment does. These results demonstrate the potential benefits of full-body interaction and MR settings for children with ASC.

    Take an Emotion Walk: Perceiving Emotions from Gaits Using Hierarchical Attention Pooling and Affective Mapping

    We present an autoencoder-based semi-supervised approach to classifying perceived human emotions from walking styles obtained from videos or motion-captured data and represented as sequences of 3D poses. Given the motion of each joint in the pose at each time step, extracted from the 3D pose sequences, we hierarchically pool these joint motions in a bottom-up manner in the encoder, following the kinematic chains in the human body. We also constrain the latent embeddings of the encoder to contain the space of psychologically motivated affective features underlying the gaits. We train the decoder to reconstruct the motions per joint per time step in a top-down manner from the latent embeddings. For the annotated data, we also train a classifier to map the latent embeddings to emotion labels. Our semi-supervised approach achieves a mean average precision of 0.84 on the Emotion-Gait benchmark dataset, which contains both labeled and unlabeled gaits collected from multiple sources. We outperform current state-of-the-art algorithms for both emotion recognition and action recognition from 3D gaits by 7%--23% absolute. More importantly, we improve the average precision by 10%--50% absolute on classes that each make up less than 25% of the labeled part of the Emotion-Gait benchmark dataset. Comment: In proceedings of the 16th European Conference on Computer Vision, 2020. Total pages 18. Total figures 5. Total tables
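    The bottom-up pooling along kinematic chains can be illustrated with a toy recursion over a skeleton tree. Everything below (the joint names, the tree, the scalar "motion" values, and simple averaging as the pooling operator) is an assumed simplification of the hierarchical encoder the abstract describes:

```python
# Pool each joint's motion with its children's pooled motions, from the
# extremities toward the root, mirroring bottom-up pooling along
# kinematic chains.

# Toy kinematic tree: joint -> list of child joints.
CHILDREN = {
    "root": ["spine"],
    "spine": ["l_shoulder", "r_shoulder"],
    "l_shoulder": ["l_hand"],
    "r_shoulder": ["r_hand"],
    "l_hand": [], "r_hand": [],
}

def pool_up(joint, features):
    """Average a joint's feature with its children's pooled features."""
    child_feats = [pool_up(c, features) for c in CHILDREN[joint]]
    feats = child_feats + [features[joint]]
    return sum(feats) / len(feats)

# Hypothetical scalar motion magnitudes per joint at one time step.
motion = {"root": 0.0, "spine": 1.0, "l_shoulder": 2.0,
          "r_shoulder": 2.0, "l_hand": 4.0, "r_hand": 4.0}
embedding = pool_up("root", motion)
```

    In the real model the per-joint features are vectors and the pooling is learned, but the traversal order, leaves first, root last, is the same.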